'Uncanny Valley': Donald Trump's Davos Drama, AI Midterms, and ChatGPT's Last Resort

WIRED

On this episode of Uncanny Valley, our hosts unpack the news from Davos, where Trump and major AI companies shared the stage at the World Economic Forum. This week, WIRED's Brian Barrett and Leah Feiger join the show as the new cohosts, alongside Zoë Schiffer. Our attention has been drawn to the drama going down in the quaint little town of Davos: Zoë tells us how, at the World Economic Forum's event, major AI players like Anthropic have been the protagonists, sharing the spotlight with President Donald Trump, who insists on invading Greenland. Brian has been looking at how ICE activity is developing, and Leah is forcing us to think about this year's midterms, because tech giants are already pouring millions into them. Plus, we dive into why OpenAI's decision to roll out ads in ChatGPT was a long time coming.

Write to us at uncannyvalley@wired.com. You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: if you're on an iPhone or iPad, open the app called Podcasts, or just tap this link.

Today, we're starting a bit of a new chapter here on the show, and I want to introduce you to my brand-new cohosts: Brian Barrett, our executive editor here at WIRED, and Leah Feiger, our senior politics editor. So thrilled to be here. Longtime listeners know the show has taken on a bunch of different formats since it launched: we had the Gadget Lab days, the roundtable, news episodes. We really created this podcast because we want to bring you the best stories and the best takes on what's happening in tech and politics. That's all going to stay the same, but this time we're going to go even deeper: what trends you should be watching for, the news that's already happened or is about to break, and how we're thinking about all of it.


Virtual Class Enhanced Discriminative Embedding Learning

Neural Information Processing Systems

Recently, learning discriminative features to improve recognition performance has gradually become a primary goal of deep learning, and numerous remarkable works have emerged. In this paper, we propose a novel yet extremely simple method, Virtual Softmax, which enhances the discriminative property of learned features by injecting a dynamic virtual negative class into the original softmax. Injecting the virtual class aims to enlarge the inter-class margin and compress the intra-class distribution by strengthening the decision-boundary constraint. Although it may seem odd to optimize with this additional virtual class, we show that our method derives from an intuitive and clear motivation, and that it indeed encourages the features to be more compact and separable. This paper empirically and experimentally demonstrates the superiority of Virtual Softmax, improving performance on a variety of object classification and face verification tasks.
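The mechanics of injecting a virtual negative class can be sketched in a few lines. The NumPy snippet below is an illustrative reconstruction, not the authors' code: it assumes the virtual class's weight vector is the normalized feature itself, so the virtual logit reduces to the feature norm, and cross-entropy is then taken over the enlarged (C+1)-way problem while the target remains a real class.

```python
import numpy as np

def virtual_softmax_loss(features, weights, label):
    """Cross-entropy with one extra *virtual* negative class.

    A minimal sketch of the idea (assumed form, not the paper's exact
    implementation): the virtual class weight is the feature direction
    x / ||x||, so its logit is simply ||x||.
    """
    logits = features @ weights               # (num_classes,) real-class logits
    virtual_logit = np.linalg.norm(features)  # logit of the injected virtual class
    all_logits = np.append(logits, virtual_logit)
    # numerically stable log-softmax over C + 1 classes
    all_logits = all_logits - all_logits.max()
    log_probs = all_logits - np.log(np.exp(all_logits).sum())
    return -log_probs[label]                  # target is still one of the real classes
```

Because the extra class only adds a competing term to the softmax denominator, this loss upper-bounds the plain softmax loss, which is the source of the tighter decision-boundary constraint.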


F-OAL: Forward-only Online Analytic Learning with Fast Training and Low Memory Footprint in Class Incremental Learning

Neural Information Processing Systems

Online Class Incremental Learning (OCIL) aims to train models incrementally, where data arrive in mini-batches and previous data are not accessible. A major challenge in OCIL is catastrophic forgetting, i.e., the loss of previously learned knowledge. Among existing baselines, replay-based methods show competitive results but require extra memory for storing exemplars, while exemplar-free methods (i.e., methods that store no data for replay) are resource-friendly but often lack accuracy. In this paper, we propose an exemplar-free approach, Forward-only Online Analytic Learning (F-OAL). Unlike traditional methods, F-OAL does not rely on back-propagation and is forward-only, significantly reducing memory usage and computation time. Cooperating with a pre-trained frozen encoder with Feature Fusion, F-OAL only needs to update a linear classifier by recursive least squares. This approach simultaneously achieves high accuracy and low resource consumption. Extensive experiments on benchmark datasets demonstrate F-OAL's robust performance in OCIL scenarios.
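The recursive-least-squares update of a linear classifier on frozen features can be sketched generically. The class below is a minimal NumPy sketch of standard blockwise regularized RLS, given here to illustrate the kind of forward-only recursion F-OAL builds on; the names (`RLSClassifier`, `gamma`) and the exact recursion are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

class RLSClassifier:
    """Forward-only linear classifier updated by recursive least squares.

    Maintains W (weights) and P, the running inverse of the regularized
    feature correlation matrix, so each mini-batch update needs only a
    forward pass through the frozen encoder and a small matrix inverse.
    """
    def __init__(self, feat_dim, num_classes, gamma=1.0):
        self.W = np.zeros((feat_dim, num_classes))
        self.P = np.eye(feat_dim) / gamma    # inverse of (gamma * I) initially

    def update(self, X, Y):
        # X: (batch, feat_dim) frozen-encoder features; Y: (batch, classes) one-hot
        # Woodbury-style blockwise update: invert only a (batch x batch) matrix
        K = self.P @ X.T @ np.linalg.inv(np.eye(len(X)) + X @ self.P @ X.T)
        self.P = self.P - K @ X @ self.P
        self.W = self.W + K @ (Y - X @ self.W)

    def predict(self, X):
        return (X @ self.W).argmax(axis=1)
```

A useful property of this recursion is that after any sequence of mini-batches it reproduces the one-shot ridge-regression solution on all data seen so far, which is what lets an online learner avoid both replay buffers and back-propagation.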


Implicit Generation and Modeling with Energy Based Models

Neural Information Processing Systems

Energy-based models (EBMs) are appealing due to their generality and simplicity in likelihood modeling, but they have traditionally been difficult to train. We present techniques to scale MCMC-based EBM training to continuous neural networks, and we show its success on the high-dimensional data domains of ImageNet32x32, ImageNet128x128, CIFAR-10, and robotic hand trajectories, achieving better samples than other likelihood models and nearing the performance of contemporary GAN approaches, while covering all modes of the data. We highlight some unique capabilities of implicit generation, such as compositionality and corrupt-image reconstruction and inpainting. Finally, we show that EBMs are useful models across a wide variety of tasks, achieving state-of-the-art out-of-distribution classification, adversarially robust classification, state-of-the-art continual online class learning, and coherent long-term predicted trajectory rollouts.
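The MCMC sampler underlying this kind of implicit generation is typically Langevin dynamics: repeated gradient steps downhill in energy plus Gaussian noise. The snippet below is a generic sketch of that sampler, not the paper's training code; the function name and hyperparameters (`steps`, `step_size`, `noise_scale`) are illustrative assumptions.

```python
import numpy as np

def langevin_sample(energy_grad, x0, steps=60, step_size=0.01, noise_scale=0.005):
    """Draw an approximate sample from p(x) proportional to exp(-E(x)).

    energy_grad: callable returning dE/dx at a point x.
    Each iteration takes a gradient step toward low energy and adds
    Gaussian noise so the chain explores rather than just descends.
    """
    x = x0.copy()
    for _ in range(steps):
        x = x - step_size * energy_grad(x) + noise_scale * np.random.randn(*x.shape)
    return x
```

For a quadratic energy E(x) = 0.5 * ||x||^2 (so energy_grad is the identity), the chain drifts toward the origin, where the density exp(-E) is highest; with a neural energy function the same loop produces the negative samples used in contrastive EBM training.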